Non-monotonic Learning
Authors
Abstract
This paper addresses methods of specialising first-order theories within the context of incremental learning systems. We demonstrate the shortcomings of existing first-order incremental learning systems with regard to their specialisation mechanisms. We prove that these shortcomings are fundamental to the use of classical logic. In particular, minimal "correcting" specialisations are not always obtainable within this framework. We propose instead the adoption of a specialisation scheme based on an existing non-monotonic logic formalism. This approach overcomes the problems that arise with incremental learning systems which employ classical logic. As a side-effect of the formal proofs developed for this paper we define a function called "deriv", which turns out to be an improvement on an existing explanation-based generalisation (EBG) algorithm. Prolog code for "deriv" and a description of its relationship to the previous EBG algorithm are given in an appendix.
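The abstract does not say which EBG algorithm "deriv" improves on, and the appendix code is not reproduced here. As a rough sketch of the kind of Prolog EBG meta-interpreter such a comparison usually starts from, the following shows a standard goal-regression formulation; the cup/1 domain theory, the operational/1 declarations, and the training object obj1 are illustrative assumptions, not material from the paper.

% ebg(+Goal, ?GenGoal, -Conds): proves the ground training Goal while
% regressing the variabilised GenGoal through the same proof tree;
% Conds collects the operational leaves, i.e. the body of the learned
% rule GenGoal :- Conds.
ebg(Goal, GenGoal, GenGoal) :-
    operational(Goal), !,
    call(Goal).
ebg((GoalA, GoalB), (GenA, GenB), (CondA, CondB)) :- !,
    ebg(GoalA, GenA, CondA),
    ebg(GoalB, GenB, CondB).
ebg(Goal, GenGoal, Conds) :-
    clause(GenGoal, GenBody),
    copy_term((GenGoal :- GenBody), (Goal :- Body)),
    ebg(Body, GenBody, Conds).

% Assumed operationality criterion: predicates allowed in learned rules.
operational(light(_)).
operational(part_of(_, _)).

% Assumed domain theory and a single training example.
:- dynamic cup/1, liftable/1, stable/1.   % so clause/2 can inspect them portably
cup(X)      :- liftable(X), stable(X).
liftable(X) :- light(X), part_of(X, handle).
stable(X)   :- part_of(X, flat_bottom).
light(obj1).
part_of(obj1, handle).
part_of(obj1, flat_bottom).

% ?- ebg(cup(obj1), cup(C), Conds).
% yields Conds such that the learned clause is
%   cup(C) :- light(C), part_of(C, handle), part_of(C, flat_bottom).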
Similar Papers
A Unifying Approach to Monotonic Language Learning on Informant
The present paper deals with strong-monotonic, monotonic, and weak-monotonic language learning from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner always has to produce better and better generalizations when fed more and more data on the concept to be learnt. We characterize strong-monotonic, monotonic, weak-mo...
Learning agents need no induction
It has been suggested that AI investigations of mechanical learning undermine sweeping anti-inductivist views in the theory of knowledge and the philosophy of science. In particular, it is claimed that some mechanical learning systems perform epistemically justified inductive generalization and prediction. Contrary to this view, it is argued that no trace of such epistemic justification is to b...
Characterizations of Monotonic and Dual Monotonic Language Learning
The present paper deals with monotonic and dual monotonic language learning from positive as well as from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner has to produce better and better generalizations when fed more and more data on the concept to be learned. The three versions of dual monotonicity describe th...
Learning Action Descriptions with A-Prolog: Action Language C
This paper demonstrates how A-Prolog can be used to solve the problem of non-monotonic inductive learning in the context of the learning of the behavior of dynamic domains. Non-monotonic inductive learning is an extension of traditional inductive learning, characterized by the use of default negation in the background knowledge and/or in the clauses being learned. The importance of non-monotoni...
Perfect Tracking of Supercavitating Non-minimum Phase Vehicles Using a New Robust and Adaptive Parameter-optimal Iterative Learning Control
In this manuscript, a new method is proposed to provide perfect tracking of the supercavitation system based on a new two-state model. The aim is to track the pitch rate and angle of attack via the fin and cavitator inputs. The pitch rate response of the supercavitating vehicle with respect to the fin angle is found to be non-minimum phase. This effect reduces the speed of the commanded pitch rate. Control...
Non-Monotonic Sentence Alignment via Semisupervised Learning
This paper studies the problem of non-monotonic sentence alignment, motivated by the observation that coupled sentences in real bitexts do not necessarily occur monotonically, and proposes a semisupervised learning approach based on two assumptions: (1) sentences with high affinity in one language tend to have their counterparts with similar relatedness in the other; and (2) initial alignment is...
Publication year: 1992